58 research outputs found

    Real-Time Recommendation of Streamed Data

    This tutorial addressed two trending topics in the field of recommender systems research, namely A/B testing and real-time recommendation of streamed data. Focusing on the news domain, participants learned how to benchmark the performance of stream-based recommendation algorithms in a live recommender system and in a simulated environment.

    User effort vs. accuracy in rating-based elicitation

    One of the unresolved issues when designing a recommender system is the number of ratings -- i.e., the profile length -- that should be collected from a new user before providing recommendations. A design tension exists, induced by two conflicting requirements. On the one hand, the system must collect "enough" ratings from the user in order to learn her/his preferences and improve the accuracy of recommendations. On the other hand, gathering more ratings adds a burden on the user, which may negatively affect the user experience. Our research investigates the effects of profile length from both a subjective (user-centric) point of view and an objective (accuracy-based) perspective. We carried out an offline simulation with three algorithms, and a set of online experiments involving 960 users overall and four recommender algorithms, to measure which of the two contrasting forces influenced by the number of collected ratings -- recommendation relevance and the burden of the rating process -- has the stronger effect on the perceived quality of the user experience. Moreover, our study identifies the potentially optimal profile length for an explicit, rating-based, human-controlled elicitation strategy.
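
    The offline part of such a study can be illustrated with a minimal sketch, assuming synthetic 1-5 ratings and a plain item-mean-plus-user-bias predictor as stand-ins for the real data and algorithms: for each candidate profile length k, only k ratings per test user are revealed, the remaining ratings are predicted, and the error is recorded.

        import numpy as np

        rng = np.random.default_rng(0)
        n_users, n_items = 200, 100
        # Synthetic 1-5 star ratings; a real study would use an actual dataset.
        R = np.clip(np.round(rng.normal(3.5, 1.0, (n_users, n_items))), 1, 5)

        item_mean = R.mean(axis=0)  # global item averages, computed once for illustration

        def mae_for_profile_length(k, test_users):
            errs = []
            for u in test_users:
                order = rng.permutation(n_items)
                known, hidden = order[:k], order[k:]   # reveal only k ratings of this user
                bias = (R[u, known] - item_mean[known]).mean() if k > 0 else 0.0
                pred = np.clip(item_mean[hidden] + bias, 1, 5)
                errs.append(np.abs(pred - R[u, hidden]).mean())
            return float(np.mean(errs))

        test_users = range(150, 200)
        for k in (0, 5, 10, 20, 40):
            print(f"profile length {k:2d}: MAE = {mae_for_profile_length(k, test_users):.3f}")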

    Benchmarking news recommendations: the CLEF NewsREEL use case

    The CLEF NewsREEL challenge is a campaign-style evaluation lab that allows participants to evaluate and optimize news recommender algorithms. The goal is to create an algorithm that is able to recommend news items that users would click, while respecting a strict time constraint. The lab challenges participants to compete in either a "living lab" (Task 1) or an evaluation that replays recorded streams (Task 2). In this report, we discuss the objectives and challenges of the NewsREEL lab, summarize last year's campaign, and outline the main research challenges that can be addressed by participating in NewsREEL 2016.
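
    A replay-style evaluation in the spirit of Task 2 can be sketched as follows; the log format, the time budget, and the popularity baseline are illustrative assumptions, not the actual NewsREEL/ORP protocol. Events are processed in timestamp order, every recommendation request must be answered within the budget, and a request counts as a hit if the item the user actually clicked appears among the suggestions.

        import time
        from collections import Counter

        TIME_BUDGET_MS = 100  # assumed per-request limit, for illustration only

        def replay(events, k=5):
            """events: time-ordered dicts with 'type' ('click' or 'request'),
            'item' for clicks, and 'clicked_item' (ground truth) for requests."""
            popularity = Counter()
            hits = requests = timeouts = 0
            for ev in events:
                if ev["type"] == "click":
                    popularity[ev["item"]] += 1          # update the model from the stream
                elif ev["type"] == "request":
                    start = time.perf_counter()
                    recs = [item for item, _ in popularity.most_common(k)]
                    elapsed_ms = (time.perf_counter() - start) * 1000
                    if elapsed_ms > TIME_BUDGET_MS:
                        timeouts += 1
                        continue
                    requests += 1
                    hits += ev["clicked_item"] in recs
            return hits / max(requests, 1), timeouts

        # usage (read_logged_stream is a hypothetical loader for the recorded stream):
        # ctr, timeouts = replay(read_logged_stream("contest.log"))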

    User-Item Reciprocity in Recommender Systems: Incentivizing the Crowd

    Data consumption has changed significantly in the last 10 years. The digital revolution and the Internet have brought an abundance of information to users. Recommender systems are a popular means of finding content that is both relevant and personalized. However, today's users require better recommender systems, capable of producing continuous data feeds that keep up with their instantaneous and mobile needs. The CrowdRec project addresses this demand by providing context-aware, resource-combining, socially-informed, interactive and scalable recommendations. The key insight of CrowdRec is that, in order to achieve the dense, high-quality, timely information required for such systems, it is necessary to move from passive user data collection to more active techniques fostering user engagement. For this purpose, CrowdRec activates the crowd, soliciting input and feedback from the wider community.

    Tuning Word2vec for Large Scale Recommendation Systems

    Word2vec is a powerful machine learning tool that emerged from Natural Language Processing (NLP) and is now applied in multiple domains, including recommender systems, forecasting, and network analysis. As Word2vec is often used off the shelf, we address the question of whether the default hyperparameters are suitable for recommender systems. The answer is emphatically no. In this paper, we first elucidate the importance of hyperparameter optimization and show that unconstrained optimization yields an average 221% improvement in hit rate over the default parameters. However, unconstrained optimization leads to hyperparameter settings that are very expensive and not feasible for large-scale recommendation tasks. To this end, we demonstrate a 138% average improvement in hit rate with a runtime budget-constrained hyperparameter optimization. Furthermore, to make hyperparameter optimization applicable for large-scale recommendation problems where the target dataset is too large to search over, we investigate generalizing hyperparameter settings from samples. We show that applying constrained hyperparameter optimization using only a 10% sample of the data still yields a 91% average improvement in hit rate over the default parameters when applied to the full datasets. Finally, we apply hyperparameters learned using our method of constrained optimization on a sample to the Who To Follow recommendation service at Twitter and are able to increase follow rates by 15%.
    Comment: 11 pages, 4 figures, Fourteenth ACM Conference on Recommender Systems
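
    A hedged sketch of what moving away from the defaults can look like in an item2vec-style setup with gensim; the interaction sequences and the specific parameter values below are placeholders, not the tuned configurations reported in the paper.

        from gensim.models import Word2Vec

        # Each "sentence" is one user's interaction history, with items as tokens.
        sessions = [
            ["item_12", "item_7", "item_99", "item_7"],
            ["item_3", "item_12", "item_41"],
            # ... a real corpus would contain millions of such sequences
        ]

        # Roughly what off-the-shelf usage looks like (gensim's default values).
        default_model = Word2Vec(sessions, sg=1, vector_size=100, window=5,
                                 negative=5, ns_exponent=0.75, sample=1e-3,
                                 epochs=5, min_count=1, seed=1)

        # The knobs a budget-constrained search would explore; these values are
        # illustrative, not the paper's recommended configuration.
        tuned_model = Word2Vec(sessions, sg=1, vector_size=64, window=3,
                               negative=20, ns_exponent=-0.5, sample=1e-5,
                               epochs=30, min_count=1, seed=1)

        # Nearest neighbours in the embedding space serve as recommendations.
        print(tuned_model.wv.most_similar("item_12", topn=3))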

    Performance models for hierarchical grid architectures

    Time-evolution of IPTV recommender systems

    Controlling Consistency in Top-N Recommender Systems

    Performance models for desktop grids

    The main characteristics of desktop grids are the large number of nodes and their heterogeneity. Application speedup on a large-scale desktop grid is limited by the heterogeneous computational capabilities of the nodes, which increase the synchronization overhead, and by the large number of nodes, which lets the serial fraction dominate performance. In this paper we present an innovative technique that may improve on the throughput of traditional grid applications by merging job partitioning and job replication. We use analytical models based on order statistics for the performance analysis of desktop-based grid applications. The models describe the effects of resource heterogeneity, the serial fraction, and synchronization overheads on application-level performance. Using the models, we show how the proposed policies can be tuned with respect to the size of the grid in order to optimize grid throughput.
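
    As a rough illustration of the modeling idea, and not the paper's actual model: assume the parallel work is split into n/r partitions, each replicated on r of the n nodes, with i.i.d. exponential per-node completion times. A replicated partition finishes when its fastest copy does, the parallel phase finishes when the slowest partition does, and the expected duration of that slowest partition involves the harmonic number H(n/r). Combined with a serial fraction s, this gives a simple speedup estimate that can be scanned over the replication factor r.

        from math import fsum

        def harmonic(k: int) -> float:
            return fsum(1.0 / i for i in range(1, k + 1))

        def speedup(n: int, r: int, s: float) -> float:
            """Expected speedup over a single node under the toy assumptions above:
            serial fraction s, n // r partitions each replicated on r nodes, and
            exponential task times (the slowest of k partitions costs a factor H(k))."""
            k = n // r                                  # number of distinct partitions
            t_parallel = (1.0 - s) / n * harmonic(k)    # min over replicas, max over partitions
            return 1.0 / (s + t_parallel)

        n, s = 1024, 0.01
        for r in (1, 2, 4, 8, 16):
            print(f"replication r={r:2d}: estimated speedup ~ {speedup(n, r, s):7.1f}")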